HW#5 --- last modified February 06 2019 04:18:19.

Solution set.

Due date: May 13

Files to be submitted:
  Hw5.zip

Purpose: To learn about default reasoning. To understand the computations a probabilistic agent might do. To gain experience with learning algorithms.

Related Course Outcomes:

The main course outcomes covered by this assignment are:

LO11 -- Students should be able to describe default reasoning.

LO12 -- Students should be able to describe or implement at least one learning algorithm.

Specification:

This homework consists of the written questions below, whose answers should be placed in a file Hw5.pdf inside your submitted Hw5.zip folder, together with the coding exercise beneath them.

  1. Give an example set of default rules that has at least four different extensions.
  2. Consider the following partially explored Wumpus World (ignore how we ended up with a world explored in this fashion):

       B    B,S
       X    B

     Here `X` indicates explored but nothing detected; `B` indicates a breeze was felt on that square; `S` indicates a stench was smelt. For each square on the frontier, use the method from the Nov 19 Lecture to compute the probability that the given square is safe. Which square should a rational agent who must find the gold search next?
  3. Consider the following table of data concerning whether or not a person is a pirate:

       Beard-Type   Teeth Color   Tattoos   Has Earring   Is Pirate
       Goatee       Yellow        Yes       No            No
       None         White         Yes       Yes           Yes
       Unkempt      Yellow        No        Yes           Yes

     Work out by hand (show work) the decision tree that our decision tree learning algorithm would compute for the above training set. (A sketch of the kind of information-gain bookkeeping involved appears after this list.)
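
To make the bookkeeping in question 3 concrete, here is a minimal Python sketch (not a solution) of the entropy and information-gain computations a standard decision-tree learner performs when picking the first attribute to split on. It assumes the usual information-gain criterion; the encoding of the table as tuples is only for illustration and is not part of the assignment.

from math import log2
from collections import Counter

# The three pirate training examples from question 3, encoded as
# (attribute values, class label).  Attribute order: Beard-Type,
# Teeth Color, Tattoos, Has Earring.
EXAMPLES = [
    (("Goatee", "Yellow", "Yes", "No"), "No"),
    (("None", "White", "Yes", "Yes"), "Yes"),
    (("Unkempt", "Yellow", "No", "Yes"), "Yes"),
]
ATTRIBUTES = ["Beard-Type", "Teeth Color", "Tattoos", "Has Earring"]

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * log2(c / total) for c in counts.values())

def information_gain(examples, attr_index):
    """Entropy reduction from splitting the examples on one attribute."""
    labels = [label for _, label in examples]
    before = entropy(labels)
    remainder = 0.0
    values = {attrs[attr_index] for attrs, _ in examples}
    for v in values:
        subset = [label for attrs, label in examples if attrs[attr_index] == v]
        remainder += (len(subset) / len(examples)) * entropy(subset)
    return before - remainder

if __name__ == "__main__":
    for i, name in enumerate(ATTRIBUTES):
        print(f"Gain({name}) = {information_gain(EXAMPLES, i):.3f}")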

For the coding portion of the homework, I'd like you to code the perceptron learning algorithm discussed in class and use it to learn the concept of a grid being mostly X's versus mostly O's. Here we imagine the inputs will be 3x3 grids of squares, each square filled with either an X or an O. Your program will be run from the command line with a line like:

python xo_learner.py training_file_name.txt test_file_name.txt
training_file_name.txt is the name of a file containing the training set for the algorithm; test_file_name.txt is the name of a file containing a grid which we want to classify as mostly X'd or mostly O'd. On such an input your program can output whatever diagnostic messages you deem useful; however, the last line it outputs should be either MOSTLY X's (if the grid is mostly X's) or MOSTLY O's (if it is mostly O's). It should produce this output by first training a perceptron using training_file_name.txt and then using the trained perceptron to classify the input grid in test_file_name.txt.

The file format for the training set is as follows: a sequence of 3x3 grids, each followed by a classification line and then a blank line. The classification line contains either an X (meaning the example was mostly X's) or an O (meaning it was mostly O's). As an example, a file might contain:

XXO
OXO
OOO
O

XXX
XXX
XXO
X

OXO
OOO
XOO
O

The file test_file_name.txt consists of just a single grid of X's and O's. For example,

OOO
OXX
OOO
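
To make the expected behavior concrete, here is a minimal sketch of how xo_learner.py might be organized. The feature encoding (X as +1, O as -1, plus a bias input), the learning rate, and the number of training epochs are my own assumptions, not part of the assignment; implement the perceptron algorithm as it was described in class.

import sys

def read_grids(filename):
    """Parse a file in the format above; yield (rows, label_or_None)."""
    with open(filename) as f:
        lines = [line.strip() for line in f]
    blocks, block = [], []
    for line in lines:
        if line:
            block.append(line)
        elif block:
            blocks.append(block)
            block = []
    if block:
        blocks.append(block)
    for b in blocks:
        rows, label = (b[:3], b[3]) if len(b) > 3 else (b[:3], None)
        yield rows, label

def features(rows):
    """Encode a 3x3 grid as a vector: bias 1, then +1 for X and -1 for O."""
    return [1.0] + [1.0 if ch == 'X' else -1.0 for row in rows for ch in row]

def train(examples, epochs=50, rate=0.1):
    """Classic perceptron updates: target +1 for mostly X's, -1 for mostly O's."""
    w = [0.0] * 10  # bias weight plus 9 grid weights
    for _ in range(epochs):
        for x, target in examples:
            predicted = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1
            if predicted != target:
                w = [wi + rate * target * xi for wi, xi in zip(w, x)]
    return w

if __name__ == "__main__":
    train_file, test_file = sys.argv[1], sys.argv[2]
    training = [(features(rows), 1 if label == 'X' else -1)
                for rows, label in read_grids(train_file)]
    w = train(training)
    rows, _ = next(read_grids(test_file))
    score = sum(wi * xi for wi, xi in zip(w, features(rows)))
    print("MOSTLY X's" if score >= 0 else "MOSTLY O's")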
As part of your project you should conduct some experiments by randomly generating training sets of different sizes and experimentally determining the successful classification percentage for each of these sizes. Write up these experiments in the file Experiments.pdf, which you should also include with your project.
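
One possible way to set up these experiments (the generator below is just an illustration, not a required design) is to generate random grids, label each by counting which symbol appears more often, write them out in the training-file format described above, train on them, and then measure the classification accuracy on additional randomly generated grids.

import random

def random_grid():
    """A random 3x3 grid; the true label is whichever symbol appears more."""
    cells = [random.choice("XO") for _ in range(9)]
    rows = ["".join(cells[i:i + 3]) for i in range(0, 9, 3)]
    label = 'X' if cells.count('X') > cells.count('O') else 'O'
    return rows, label

def write_training_file(filename, n_examples):
    """Write n_examples random grids in the training-file format above."""
    with open(filename, "w") as f:
        for _ in range(n_examples):
            rows, label = random_grid()
            f.write("\n".join(rows) + "\n" + label + "\n\n")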

Point Breakdown

Written problems (2pts each; each documented flaw 1/2pt off)                   6pts
PEP 8 coding guidelines followed                                               0.5pts
xo_learner.py reads in files correctly and outputs MOSTLY X's or MOSTLY O's    0.5pts
xo_learner.py implements the perceptron algorithm described in class           2pts
Experiments.pdf contains a reasonable write-up of the experiments conducted    1pt